1. Scenario 1: Windows Azure Web and Worker Role Communications
Consider a scenario in which
you're designing an ecommerce web application in Windows Azure with a
Web Role front end and several Worker Roles for back-end processing
work. The Web Role instances continuously send purchase order
information to the Worker Roles for order processing. In this scenario,
you can use the Windows Azure Queue service to queue purchase order
messages for the Worker Roles, as shown in Figure 1.
In Figure 1,
Web Role instances 1 and 2 send orders to the order-processing queue.
Worker Roles 1 and 2 dequeue the order messages and process the orders.
Because not all orders have to be processed immediately, Worker Roles
can pick up from the queue only those orders that are ready for
processing. In this way, you create an effective message communication
system between Web Roles and Worker Roles while taking advantage of the
scalable, highly available Queue service infrastructure. If an
order message exceeds the 8 KB message-size limit, you can store the
message body in the Blob service and pass a link to the blob as the
queue message, as shown in Figure 1. When a Worker Role dequeues the message, it retrieves the contents of the order from the Blob service.
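As a minimal sketch of this pattern (using in-memory stand-ins for the Queue and Blob services; the class and method names here are illustrative, not the Azure API):

```python
import uuid

MAX_INLINE_BYTES = 8 * 1024  # the Queue service message-size limit described above

class BlobStore:
    """In-memory stand-in for the Windows Azure Blob service."""
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        blob_id = str(uuid.uuid4())
        self._blobs[blob_id] = data
        return f"blob://orders/{blob_id}"   # stand-in for a blob URL

    def get(self, link: str) -> bytes:
        return self._blobs[link.rsplit("/", 1)[-1]]

class OrderQueue:
    """In-memory stand-in for the order-processing queue."""
    def __init__(self, blob_store: BlobStore):
        self._messages = []
        self._blobs = blob_store

    def enqueue_order(self, order_body: bytes) -> None:
        # Small orders go on the queue directly; large ones go to the
        # Blob service, and only a link is queued.
        if len(order_body) <= MAX_INLINE_BYTES:
            self._messages.append(("inline", order_body))
        else:
            self._messages.append(("blob-link", self._blobs.put(order_body)))

    def dequeue_order(self) -> bytes:
        kind, payload = self._messages.pop(0)
        if kind == "blob-link":
            return self._blobs.get(payload)   # Worker Role fetches the body
        return payload
```

A small order travels on the queue itself, while a 100 KB order round-trips through the blob store transparently to the consumer.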
2. Scenario 2: Worker Role Load Distribution
Continuing Scenario 1,
depending on the volume of messages, you can either adjust the number
of queues or the number of instances of Worker Roles for processing
orders. For example, if you identify during your testing phase that one
Worker Role can process only ten orders at a time, you can configure
Worker Roles to pick up only ten messages from the queue. If the number
of messages in the queue keeps increasing beyond the number that Worker
Roles can process, you can create more instances of Worker Roles on
demand and increase the order-processing capacity. Similarly, if the
queue is underutilized, you can reduce the number of Worker Role
instances that process orders.
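A back-of-the-envelope sizing rule for this scenario might look like the following (the ten-orders-per-role figure comes from the testing-phase example above; the function name is illustrative):

```python
import math

def worker_instances_needed(queue_length: int, orders_per_role: int = 10,
                            min_instances: int = 1) -> int:
    """Scale the Worker Role count to the queue backlog.

    Each role is assumed to process `orders_per_role` messages at a time,
    per the testing-phase measurement in the text.
    """
    if queue_length <= 0:
        return min_instances
    return max(min_instances, math.ceil(queue_length / orders_per_role))
```

For example, a backlog of 95 messages calls for ten instances, while an empty queue falls back to the minimum.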
In this scenario, the Queue
service plays the role of capacity indicator. You can think of the
queues in the Queue service as indicators of the system's processing
capacity. You can also use this pattern to process scientific
calculations and perform business analysis. Figure 2 illustrates the Worker Role load-distribution scenario.
In Figure 2,
Worker Roles 1 through 3 can handle the average order load. When orders
back up in the queue, you can spawn more Worker
Roles (4 through n), depending on demand and the overall order-processing capacity required.
3. Scenario 3: Interoperable Messaging
Large enterprises
use applications from different vendors, and these applications seldom
interoperate with each other. An enterprise may end up buying an
expensive third-party tool that acts as the interoperability bridge
between these applications. Instead, the enterprise could use the Queue
service to send messages across the applications that don't
interoperate with each other naturally. The Queue service exposes a
REST API based on open standards. Any programming language or
application capable of Internet programming can send and receive
messages from the Windows Azure Queue service using the REST API. Figure 3
illustrates using the Queue service to bridge a
Java-based Sales application and a .NET-based CRM application.
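As a rough sketch, each application would speak plain HTTP to the Queue service. The snippet below builds the XML body a client might send in a Put Message call (the XML element names follow the Queue service REST API; base64 encoding of the message text is the convention the Azure client libraries use; the account and queue names are placeholders, and the required Shared Key authorization headers are omitted):

```python
import base64

def build_put_message_body(message: str) -> str:
    """Build the XML body for a Queue service Put Message call.

    The message text is base64-encoded so arbitrary payloads survive
    XML transport.
    """
    encoded = base64.b64encode(message.encode("utf-8")).decode("ascii")
    return f"<QueueMessage><MessageText>{encoded}</MessageText></QueueMessage>"

# An interoperating client (Java, .NET, or anything HTTP-capable) would
# POST this body to a URL of the form:
#   https://<account>.queue.core.windows.net/<queue-name>/messages
body = build_put_message_body('{"orderId": "O1", "amount": 99.95}')
```

Because nothing here is platform-specific, the Java Sales application and the .NET CRM application can each produce and consume these messages independently.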
4. Scenario 4: Guaranteed Processing
In Scenario 1, every order
needs guaranteed processing. Any loss of orders can cause financial
damage to the company, so the Worker Roles and the Queue service must
ensure that every order in the queue is processed. You can implement
guaranteed processing by following four simple principles:
1. Set the visibilitytimeout parameter to a value large enough to last beyond the average processing time for a message.
2. Keep visibilitytimeout small enough that a message becomes visible again promptly when processing fails in a consumer (Worker Role) or the consumer crashes.
3. Don't delete a message until it has been processed completely.
4. Design the message consumers (Worker Roles) to be idempotent; that is, they should handle the same message multiple times without an adverse effect on the application's business logic.
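The idempotency principle can be sketched as follows (an illustrative consumer that remembers processed order IDs so a redelivered message doesn't double-charge; the class and field names are hypothetical):

```python
class OrderProcessor:
    """Illustrative idempotent consumer: processing the same order
    message twice has no additional business effect."""
    def __init__(self):
        self._processed_ids = set()   # in practice, kept in durable storage
        self.total_charged = 0.0

    def process(self, order_id: str, amount: float) -> bool:
        # A redelivered message (e.g. after a visibilitytimeout expiry)
        # is recognized and skipped.
        if order_id in self._processed_ids:
            return False
        self.total_charged += amount
        self._processed_ids.add(order_id)
        return True
```

Even if the Queue service delivers order O1 twice, the customer is charged only once.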
Figure 4 illustrates guaranteed message processing in the context of the order-processing example discussed in Scenario 1.
In Figure 4, two Web Roles create orders, and three Worker Roles process orders. Consider the following steps:
1. Worker Role 1 reads order O1 for processing. A Worker Role typically takes 15 seconds to process an order, and the visibilitytimeout for messages is set to 60 seconds.
2. Worker Role 1 starts processing O1. At this point, O1 isn't visible to the other Worker Roles for 60 seconds.
3. Worker Role 1 crashes after 10 seconds.
4. After 60 seconds, O1 becomes visible again because Worker Role 1 never deleted it.
5. Worker Role 2 reads O1 and processes it.
6. After processing completes, Worker Role 2 deletes O1 from the queue.
The important points to note
here are that Worker Role 1 didn't delete the message from the queue
before processing completed, and the visibilitytimeout was set to a
window that exceeds the typical processing time of an order.
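The sequence above can be simulated with a toy queue that honors visibilitytimeout (a sketch driven by a manually advanced clock; this is not the Azure API):

```python
class VisibilityQueue:
    """Toy queue with a visibilitytimeout, driven by a manual clock."""
    def __init__(self):
        self.now = 0                    # simulated seconds
        self._msgs = []                 # [body, visible_at] pairs

    def put(self, body):
        self._msgs.append([body, 0])

    def get(self, visibilitytimeout):
        # Return the first visible message and hide it for the timeout.
        for msg in self._msgs:
            if msg[1] <= self.now:
                msg[1] = self.now + visibilitytimeout
                return msg[0]
        return None

    def delete(self, body):
        self._msgs = [m for m in self._msgs if m[0] != body]

q = VisibilityQueue()
q.put("O1")
assert q.get(60) == "O1"    # Worker Role 1 reads O1; hidden for 60 seconds
q.now = 10                  # Worker Role 1 crashes at t=10
assert q.get(60) is None    # O1 is still invisible to the other roles
q.now = 60                  # the timeout expires; O1 reappears
assert q.get(60) == "O1"    # Worker Role 2 reads O1
q.delete("O1")              # Worker Role 2 deletes O1 after processing
assert q.get(60) is None    # the queue is now empty
```

The crash at t=10 loses nothing: because the message was never deleted, the Queue service simply makes it visible again once the timeout lapses.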